Formalized Hopfield Networks and Boltzmann Machines
Cipollina, Matteo, Karatarakis, Michail, Wiedijk, Freek
Neural networks are widely used, yet their analysis and verification remain challenging. In this work, we present a Lean 4 formalization of neural networks, covering both deterministic and stochastic models. We first formalize Hopfield networks, recurrent networks that store patterns as stable states. We prove convergence and the correctness of Hebbian learning, a training rule that updates network parameters to encode patterns, here limited to the case of pairwise-orthogonal patterns. We then consider stochastic networks, where updates are probabilistic and convergence is to a stationary distribution. As a canonical example, we formalize the dynamics of Boltzmann machines and prove their ergodicity, showing convergence to a unique stationary distribution using a new formalization of the Perron-Frobenius theorem.
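The storage rule mentioned in the abstract can be sketched numerically. The following NumPy toy follows the standard Hopfield conventions (it is an illustration, not the paper's Lean 4 development): it stores two pairwise-orthogonal ±1 patterns with the Hebbian rule W = Σ_p x_p x_pᵀ (zeroed diagonal) and checks that each stored pattern is a fixed point of the update.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: sum of outer products, no self-connections."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def update(W, x):
    """One synchronous update step; sign(0) is mapped to +1."""
    h = W @ x
    return np.where(h >= 0, 1, -1)

# Two pairwise-orthogonal patterns on 4 units.
patterns = np.array([[1, 1, -1, -1],
                     [1, -1, 1, -1]])
W = hebbian_weights(patterns)
for p in patterns:
    # For orthogonal patterns, W @ p = (n - k) p with positive gain,
    # so each stored pattern is stable under the update.
    assert np.array_equal(update(W, p), p)
```

For non-orthogonal patterns cross-talk terms appear and stability is no longer guaranteed, which is why the formalized correctness result is stated for the pairwise-orthogonal case.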
Leveraging LLMs for Design Ideation: An AI Tool to Assist Creativity
Kokate, Rutvik, Kompella, Pranati, Onkar, Prasad
The creative potential of computers has intrigued researchers for decades. Since the emergence of Generative AI (Gen AI), computer creativity has found many new dimensions and applications. As Gen AI permeates mainstream discourse and usage, researchers are delving into how it can improve and complement what humans do. Creative potential is a highly relevant notion in design practice and research, especially in the initial stages of ideation and conceptualisation. There is scope to improve creative potential in these stages, especially using machine intelligence. We propose a structured ideation session involving inspirational stimuli and utilise Gen AI to deliver this structure to designers through ALIA: Analogical LLM Ideation Agent, a tool for small-group ideation scenarios. The tool enables speech-based interactions with a Large Language Model (LLM) for inference generation. Inspiration is drawn from the synectic ideation method and the philosophy of dialectics to design the optimal stimuli for group ideation. The tool is tested in design ideation sessions to compare the output of AI-assisted ideation sessions with that of traditional ideation sessions. Preliminary findings show that participants rated their ideas more highly when assisted by ALIA and responded favourably to speech-based interactions.
What does Elon Musk do with all his money?
Tesla boss Elon Musk has been one of the world's richest people for several years now, and that wealth recently went stratospheric when he became the first half-trillionaire. Despite this, Musk has insisted he leads a largely unglamorous lifestyle. He said in 2021 that he lived in a Texas home valued at $50,000 (£38,000). His former partner Grimes, with whom he has two children, told Vanity Fair in 2022 that he does not live the extravagant life of excess luxury many assume.
Tree Search for LLM Agent Reinforcement Learning
Ji, Yuxiang, Ma, Ziyu, Wang, Yong, Chen, Guanhua, Chu, Xiangxiang, Wu, Liaoni
Recent advances in reinforcement learning (RL) have significantly enhanced the agentic capabilities of large language models (LLMs). In long-horizon, multi-turn agent tasks, existing approaches driven solely by outcome rewards often suffer from sparse supervision. To address this challenge, we propose Tree-based Group Relative Policy Optimization (Tree-GRPO), a grouped agent RL method based on tree search, where each tree node represents a complete agent interaction step. By sharing common prefixes, tree-search sampling increases the number of rollouts achievable within a fixed budget of tokens or tool calls. Moreover, we find that the tree-structured trajectory naturally allows the construction of step-wise process supervision signals even when using only the outcome reward. Based on this, Tree-GRPO estimates grouped relative advantages at both the intra-tree and inter-tree levels. Through theoretical analysis, we demonstrate that the objective of intra-tree group relative policy optimization is equivalent to that of step-level direct preference learning. Experiments across 11 datasets and 3 types of QA tasks demonstrate the superiority of the proposed tree-based RL over chain-based RL methods.
[Figure 1: Comparison of chain-based and tree-based sampling strategies in LLM multi-turn agent RL. The tree structure brings two major advantages: (i) a smaller rollout budget (both tokens and tool calls); (ii) higher performance. Right (ours): tree search with nodes corresponding to complete agent steps.]
Reinforcement Learning (RL) has emerged as a pivotal post-training paradigm for Large Language Models (LLMs), catalyzing the development of several frontier models (DeepSeek-AI Team, 2025; Yang et al., 2025a; OpenAI, 2024). RL-tuned LLMs trained only with outcome rewards acquire complex reasoning abilities and achieve remarkable gains in single-turn tasks such as mathematical proof and code generation (Team et al., 2025b; Yu et al., 2025; Chu et al., 2025a; Shao et al., 2024; Xin et al., 2024). This suggests that LLMs can learn not only through static imitation but also by actively interacting with dynamic environments. Guided by this prospect, recent works have extended the RL paradigm to more complex agent settings involving dynamic, multi-turn interactions (Feng et al., 2025b; Singh et al., 2025; Wang et al., 2025b; Qian et al., 2025; Feng et al.).
(Work done during internship at AMAP, Alibaba Group.)
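The grouped advantage estimation described above can be illustrated with a small sketch. The tree structure, rewards, and the standardization used below are assumptions chosen to mirror the usual GRPO convention (reward minus group mean, divided by group standard deviation), not the authors' released implementation: rollouts that share a prefix form an intra-tree group, and all rollouts together form the inter-tree group.

```python
import math

def group_relative_advantages(rewards):
    """Standardize rewards within a group: (r - mean) / std."""
    mu = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mu) ** 2 for r in rewards) / len(rewards))
    if std == 0:
        return [0.0 for _ in rewards]
    return [(r - mu) / std for r in rewards]

# Toy tree: two branches from the root, each with two rollouts
# that share the branch prefix; only outcome rewards are observed.
tree = {"branch_a": [1.0, 0.0], "branch_b": [1.0, 1.0]}

# Intra-tree level: advantages relative to siblings sharing a prefix,
# which yields a step-wise preference signal from outcome rewards alone.
intra = {k: group_relative_advantages(v) for k, v in tree.items()}

# Inter-tree level: advantages relative to all rollouts in the batch.
all_rewards = [r for v in tree.values() for r in v]
inter = group_relative_advantages(all_rewards)

# In branch_a the successful rollout is preferred over its sibling,
# while branch_b (all rewards equal) contributes no intra-tree signal.
assert intra["branch_a"] == [1.0, -1.0]
assert intra["branch_b"] == [0.0, 0.0]
```

The prefix sharing is what makes this cheap: sibling rollouts reuse the tokens and tool calls of their common ancestor steps, so more comparable trajectories fit in a fixed budget.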
Real-Time Analysis of Unstructured Data with Machine Learning on Heterogeneous Architectures
As the particle physics community needs ever-higher precision to test our current model of the subatomic world, ever-larger datasets are necessary. With upgrades scheduled for the detectors of colliding-beam experiments around the world, and specifically at the Large Hadron Collider at CERN, more collisions and more complex interactions are expected. This directly implies an increase in data produced and consequently in the computational resources needed to process them. At CERN, the amount of data produced is gargantuan. This is why the data have to be heavily filtered and selected in real time before being permanently stored. These data can then be used to perform physics analyses, in order to expand our current understanding of the universe and improve the Standard Model of physics. This real-time filtering, known as triggering, involves complex processing happening at frequencies as high as 40 MHz. This thesis contributes to understanding how machine learning models can be efficiently deployed in such environments to maximize throughput and minimize energy consumption. Inevitably, modern hardware designed for such tasks and contemporary algorithms are needed to meet the challenges posed by the stringent, high-frequency data rates. In this work, I present our graph neural network-based pipeline, developed for charged particle track reconstruction at the LHCb experiment at CERN. The pipeline was implemented end-to-end inside LHCb's first-level trigger, entirely on GPUs. Its performance was compared against the classical tracking algorithms currently in production at LHCb. The pipeline was also accelerated on the FPGA architecture, and its performance in terms of power consumption and processing speed was compared against the GPU implementation.
Rethinking Over-Smoothing in Graph Neural Networks: A Perspective from Anderson Localization
Graph Neural Networks (GNNs) have shown great potential in graph data analysis due to their powerful representation capabilities. However, as the network depth increases, the issue of over-smoothing becomes more severe, causing node representations to lose their distinctiveness. This paper analyzes the mechanism of over-smoothing through an analogy to Anderson localization and introduces the participation degree as a metric to quantify this phenomenon. Specifically, as the depth of the GNN increases, node features homogenize after multiple layers of message passing, leading to a loss of distinctiveness, similar to the behavior of vibration modes in disordered systems. In this context, over-smoothing in GNNs can be understood as the expansion of low-frequency modes (increased participation degree) and the localization of high-frequency modes (decreased participation degree). Based on this, we systematically review the potential connection between Anderson localization in disordered systems and over-smoothing in Graph Neural Networks. We conduct a theoretical analysis and propose that over-smoothing may be alleviated by reducing the disorder in information propagation.
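The participation-degree metric can be made concrete. The sketch below uses the standard participation-ratio definition from localization physics, PR(v) = (Σᵢ vᵢ²)² / Σᵢ vᵢ⁴, as an assumed formalization (the paper's exact metric may differ): PR equals n for a mode spread uniformly over n nodes and 1 for a mode concentrated on a single node.

```python
import numpy as np

def participation_ratio(v):
    """PR(v) = (sum v_i^2)^2 / sum v_i^4: the effective number of
    nodes an eigenmode occupies."""
    v2 = v ** 2
    return v2.sum() ** 2 / (v2 ** 2).sum()

n = 8
# A fully delocalized mode occupies all n nodes...
uniform = np.ones(n) / np.sqrt(n)
# ...while a fully localized mode occupies exactly one.
localized = np.zeros(n)
localized[0] = 1.0

assert np.isclose(participation_ratio(uniform), n)
assert np.isclose(participation_ratio(localized), 1.0)
```

Applied to the eigenmodes of a graph's propagation operator, rising PR for low-frequency modes and falling PR for high-frequency modes would match the over-smoothing picture the paper describes.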
Despite Protests, Elon Musk Secures Air Permit for xAI
A local health department in Memphis has granted Elon Musk's xAI data center an air permit to continue operating the gas turbines that power the company's Grok chatbot. The permit comes amid widespread community opposition and a looming lawsuit alleging the company violated the Clean Air Act. The Shelby County Health Department released its air permit for the xAI project Wednesday, after receiving hundreds of public comments. The news was first reported by the Daily Memphian. In June, the Memphis Chamber of Commerce announced that xAI had chosen a site in Memphis to build its new supercomputer.
Weighted Assumption Based Argumentation to reason about ethical principles and actions
Baldi, Paolo, D'Asaro, Fabio Aurelio, Dyoub, Abeer, Lisi, Francesca Alessandra
We augment Assumption Based Argumentation (ABA for short) with weighted argumentation. In a nutshell, we assign weights to arguments and then derive the weight of attacks between ABA arguments. We illustrate our proposal through running examples in the field of ethical reasoning, and present an implementation based on Answer Set Programming.
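The idea of deriving attack weights from argument weights can be illustrated with a toy sketch. Every convention below is hypothetical (the assumption weights, the minimum aggregation for argument weight, and an attack inheriting its attacker's weight are illustrative choices, not the paper's actual semantics, and the authors' implementation uses Answer Set Programming rather than Python):

```python
# Weighted assumptions in an ethical-reasoning flavour.
assumption_weight = {"fair": 0.9, "urgent": 0.6, "risky": 0.4}

# Each argument: (supporting assumptions, conclusion).
arguments = {
    "A": ({"fair", "urgent"}, "act"),
    "B": ({"risky"}, "not_fair"),  # B undermines A's assumption "fair"
}

# Contrary relation: which conclusion attacks which assumption.
contrary = {"not_fair": "fair"}

def arg_weight(name):
    """Illustrative choice: an argument is as strong as its weakest
    supporting assumption."""
    support, _ = arguments[name]
    return min(assumption_weight[a] for a in support)

def attacks(attacker, target):
    """ABA-style attack: the attacker concludes the contrary of one of
    the target's supporting assumptions."""
    _, conclusion = arguments[attacker]
    support, _ = arguments[target]
    return contrary.get(conclusion) in support

# Derived attack weights (illustrative: an attack inherits the weight
# of the attacking argument).
attack_weights = {
    (x, y): arg_weight(x)
    for x in arguments for y in arguments if attacks(x, y)
}
# attack_weights == {("B", "A"): 0.4}
```

A downstream semantics could then, for instance, discount attacks below a threshold; the paper should be consulted for the actual derivation of attack weights.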